
    Reduced Scaling Hilbert Space Variational Monte Carlo

    We show that for both single-Slater-Jastrow and Jastrow geminal power wave functions, the formal cost scaling of Hilbert space variational Monte Carlo can be reduced from fifth to fourth order in the system size, thus bringing it in line with the long-standing scaling of its real space counterpart. While traditional quantum chemistry methods can reduce costs related to the two-electron integral tensor through resolution-of-the-identity and Cholesky decomposition approaches, we show that such approaches are ineffective in the presence of Hilbert space Jastrow factors. Instead, we develop a simple semi-stochastic approach that can take similar advantage of the near-sparsity of this four-index tensor. Through demonstrations on alkanes of increasing length, we show that accuracy and overall statistical uncertainty are not meaningfully affected and that a total cost crossover is reached as early as 50 electrons.
    Comment: 8 pages, 7 figures
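    A minimal sketch of the semi-stochastic idea described above, assuming a NumPy setting: elements of a four-index tensor above a hypothetical threshold are contracted exactly, while the small remainder is estimated by importance sampling. The function name, threshold, and estimator are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def semistochastic_contraction(V, D, threshold=1e-4, n_samples=1000, rng=None):
    """Estimate E = sum_{pqrs} V[p,q,r,s] * D[p,q,r,s] semi-stochastically.

    Large tensor elements (|V| >= threshold) are contracted exactly; the
    small remainder is estimated by importance-sampled Monte Carlo.
    Hypothetical illustration of exploiting near-sparsity, not the
    algorithm from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Deterministic part: exact contraction over the large elements.
    large = np.abs(V) >= threshold
    e_det = np.sum(V[large] * D[large])

    # Stochastic part: sample small elements with probability proportional to |V|.
    V_small = np.where(large, 0.0, V).ravel()
    D_small = D.ravel()
    weights = np.abs(V_small)
    norm = weights.sum()
    if norm == 0.0:
        return e_det
    p = weights / norm
    idx = rng.choice(V_small.size, size=n_samples, p=p)
    # Unbiased estimator: average of (V*D)/p over the sampled elements.
    e_stoch = np.mean(V_small[idx] * D_small[idx] / p[idx])
    return e_det + e_stoch
```

    The deterministic part captures the near-sparse structure exactly, while the stochastic part contributes an unbiased correction whose variance shrinks as the threshold is lowered.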

    Structural Embedding of Syntactic Trees for Machine Comprehension

    Deep neural networks for machine comprehension typically utilize only word or character embeddings, without explicitly taking advantage of structured linguistic information such as constituency trees and dependency trees. In this paper, we propose structural embedding of syntactic trees (SEST), an algorithmic framework that encodes structured information into vector representations to boost the performance of machine comprehension algorithms. We evaluate our approach using a state-of-the-art neural attention model on the SQuAD dataset. Experimental results demonstrate that our model can accurately identify the syntactic boundaries of sentences and extract answers that are more syntactically coherent than those of the baseline methods.
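    A hypothetical sketch of one way such structural embeddings might be built, assuming a PyTorch setting: represent each token by the sequence of constituency labels on its root-to-leaf path, embed and pool those labels, and concatenate the result with the word embedding. The class name, dimensions, and pooling rule are assumptions for illustration, not the SEST architecture itself.

```python
import torch
import torch.nn as nn

class StructuralEmbedding(nn.Module):
    """Embed a token's path of constituency tags (e.g. S -> VP -> NP)
    and combine it with the word embedding. Illustrative sketch only."""

    def __init__(self, n_words, n_tags, d_word=100, d_tag=20):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, d_word)
        self.tag_emb = nn.Embedding(n_tags, d_tag, padding_idx=0)

    def forward(self, word_ids, tag_path_ids):
        # word_ids: (batch, seq_len)
        # tag_path_ids: (batch, seq_len, path_len), 0-padded root-to-leaf paths
        w = self.word_emb(word_ids)                  # (batch, seq, d_word)
        t = self.tag_emb(tag_path_ids)               # (batch, seq, path, d_tag)
        # Mean-pool over the syntactic path, ignoring padding positions.
        mask = (tag_path_ids != 0).unsqueeze(-1).float()
        t = (t * mask).sum(dim=2) / mask.sum(dim=2).clamp(min=1.0)
        return torch.cat([w, t], dim=-1)             # (batch, seq, d_word + d_tag)
```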

    Optimal Entanglement Transformations Among N-qubit W-Class States

    We investigate the physically allowed probabilities for transforming one N-partite W-class state to another by means of local operations assisted with classical communication (LOCC). Recently, Kintas and Turgut obtained an upper bound for the maximum probability of transforming two such states [arXiv:1003.2118v1]. Here, we provide a simple necessary and sufficient condition for when this upper bound can be attained and thus when optimality of state transformation can be achieved. Our discussion involves obtaining lower bounds for the transformation of arbitrary W-class states and showing precisely when this bound saturates the bound of [arXiv:1003.2118v1]. Finally, we consider the question of transforming symmetric W-class states and find that, in general, the optimal one-shot procedure for converting between two symmetric states requires a non-symmetric filter by all the parties.
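    For context, a sketch in the standard component parametrization used in this literature; the precise statement of the bound should be taken from [arXiv:1003.2118v1] itself.

```latex
% N-qubit W-class state in the standard parametrization (up to local
% unitaries), with x_0 + x_1 + ... + x_N = 1:
\begin{equation}
  |\vec{x}\rangle = \sqrt{x_0}\,|00\cdots 0\rangle
                  + \sqrt{x_1}\,|10\cdots 0\rangle
                  + \sqrt{x_2}\,|01\cdots 0\rangle
                  + \cdots
                  + \sqrt{x_N}\,|00\cdots 1\rangle .
\end{equation}
% Sketch of a Kintas-Turgut-type upper bound on the LOCC conversion
% probability (check the exact statement in arXiv:1003.2118v1):
\begin{equation}
  P\bigl(|\vec{x}\rangle \to |\vec{y}\rangle\bigr)
  \;\le\; \min_{1 \le k \le N} \frac{x_k}{y_k} .
\end{equation}
```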

    Mass Dependence of Higgs Production at Large Transverse Momentum

    The transverse momentum distribution of the Higgs at large $P_T$ is complicated by its dependence on three important energy scales: $P_T$, the top quark mass $m_t$, and the Higgs mass $m_H$. A strategy for simplifying the calculation of the cross section at large $P_T$ is to calculate only the leading terms in its expansion in $m_t^2/P_T^2$ and/or $m_H^2/P_T^2$. The expansion of the cross section in inverse powers of $P_T$ is complicated by logarithms of $P_T$ and by mass singularities. In this paper, we consider the top-quark loop contribution to the subprocess $q\bar{q} \to H+g$ at leading order in $\alpha_s$. We show that the leading power of $1/P_T^2$ can be expressed in the form of a factorization formula that separates the large scale $P_T$ from the scale of the masses. All the dependence on $m_t$ and $m_H$ can be factorized into a distribution amplitude for $t\bar{t}$ in the Higgs, a distribution amplitude for $t\bar{t}$ in a real gluon, and an endpoint contribution. The factorization formula can be used to simplify calculations of the $P_T$ distribution at large $P_T$ to next-to-leading order in $\alpha_s$.
    Comment: 49 pages, 8 figures
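    Schematically, the structure described above might be written as below; this is an illustrative sketch of the factorization's shape, not the paper's exact formula.

```latex
% Schematic leading-power factorization: distribution amplitudes carry
% all dependence on m_t and m_H, hard kernels depend only on P_T
% (illustrative only; see the paper for the precise formula):
\begin{equation}
  \frac{d\hat{\sigma}}{dP_T^2}
  \;=\; \frac{1}{P_T^2}
  \Bigl[
    \phi_{t\bar{t}/H} \otimes \hat{\sigma}_{\mathrm{hard}}(P_T)
    \;+\; \phi_{t\bar{t}/g} \otimes \hat{\sigma}'_{\mathrm{hard}}(P_T)
    \;+\; \Delta_{\mathrm{endpoint}}
  \Bigr]
  \;+\; \mathcal{O}\!\left(\frac{m^2}{P_T^4}\right).
\end{equation}
```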

    Cracking the Network Code: Four Principles for Grantmakers

    As grantmakers and nonprofits look for ways to collaborate more effectively, many are experimenting with working with and through networks to achieve greater impact. Because networks are by definition loosely controlled and emergent, understanding how to support them effectively feels like a mystery to many grantmakers. GEO's newest publication sets out to crack the code behind the network mystique. In fact, there is a method to working more efficiently and effectively through networks, and a critical first step for grantmakers is adopting a network mindset, which may require dramatic shifts in attitude and behavior for some. "Cracking the Network Code" outlines the four principles that comprise the network mindset, illustrates them with a range of examples of networks that have achieved real results, and offers practical questions and recommendations to help grantmakers realize the benefits and avoid the common pitfalls of working through networks.

    High-Performance Distributed ML at Scale through Parameter Server Consistency Models

    As Machine Learning (ML) applications increase in data size and model complexity, practitioners turn to distributed clusters to satisfy the increased computational and memory demands. Unfortunately, effective use of clusters for ML requires considerable expertise in writing distributed code, while highly-abstracted frameworks like Hadoop have not, in practice, approached the performance seen in specialized ML implementations. The recent Parameter Server (PS) paradigm is a middle ground between these extremes, allowing easy conversion of single-machine parallel ML applications into distributed ones, while maintaining high throughput through relaxed "consistency models" that allow inconsistent parameter reads. However, due to insufficient theoretical study, it is not clear which of these consistency models can really ensure correct ML algorithm output; at the same time, there remain many theoretically-motivated but unexplored opportunities to maximize computational throughput. Motivated by this challenge, we study both the theoretical guarantees and empirical behavior of iterative-convergent ML algorithms in existing PS consistency models. We then use the gleaned insights to improve a consistency model using an "eager" PS communication mechanism, and implement it as a new PS system that enables ML algorithms to reach their solution more quickly.
    Comment: 19 pages, 2 figures
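    As a hedged illustration of a relaxed consistency model in the spirit of such PS systems (bounded staleness): a worker may read a possibly stale parameter value only if it is within a staleness bound of the slowest worker. The class, method names, and rule below are assumptions for illustration, not the paper's design.

```python
import threading

class BoundedStalenessTable:
    """Toy parameter-server table with a bounded-staleness read rule:
    a worker may read only while it is within `staleness` iterations
    of the slowest worker. Illustrative sketch only."""

    def __init__(self, n_workers, staleness=3):
        self.params = {}
        self.staleness = staleness
        self.clocks = [0] * n_workers          # last completed iteration per worker
        self.cond = threading.Condition()

    def update(self, key, delta):
        # Accumulate an additive update into the shared parameter.
        with self.cond:
            self.params[key] = self.params.get(key, 0.0) + delta

    def clock(self, worker_id):
        # Worker signals it finished an iteration; wake blocked readers.
        with self.cond:
            self.clocks[worker_id] += 1
            self.cond.notify_all()

    def read(self, worker_id, key):
        with self.cond:
            # Block until this worker is within `staleness` iterations
            # of the slowest worker, then serve a (possibly stale) value.
            while self.clocks[worker_id] - min(self.clocks) > self.staleness:
                self.cond.wait()
            return self.params.get(key, 0.0)
```

    An "eager" communication mechanism, as the abstract describes it, would push fresh updates to workers sooner rather than only serving them on demand; the sketch above shows only the read-side staleness check.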